Search for: All records

Creators/Authors contains: "Furlani, Thomas"

Note: Clicking on a Digital Object Identifier (DOI) number will take you to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Given the important role cyberinfrastructure (CI) plays in research and economic development, and its high cost, the ability to apply data-driven design principles to customize new CI investment to best serve the intended community, and to provide fact-based justification for its need, is critical. Here we describe a data-driven approach to CI system design based on workload analyses obtained using the popular open-source CI management tool Open XDMoD, and how it was leveraged in a procurement to provide end-users with an additional 5.6 million CPU hours annually, with subsequent procurements following similar design goals. In addition to system design, we demonstrate Open XDMoD's utility in providing fact-based justification for the CI procurement through usage metrics of existing CI resources.
    Free, publicly-accessible full text available July 18, 2026
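As a rough illustration of the kind of workload analysis that entry 1 attributes to Open XDMoD, the sketch below aggregates delivered CPU-hours by job size from toy scheduler accounting records. The field names, size buckets, and pandas-based approach are assumptions for illustration only, not Open XDMoD's implementation.

```python
# Hypothetical sketch: aggregate delivered CPU-hours by job size from
# scheduler accounting records. Field names and thresholds are illustrative.
import pandas as pd

# Toy accounting records: cores requested and wallclock seconds per job.
jobs = pd.DataFrame(
    {"cores": [1, 4, 16, 128, 1024, 2, 64],
     "wallclock_seconds": [3600, 7200, 1800, 3600, 900, 600, 5400]}
)
jobs["cpu_hours"] = jobs["cores"] * jobs["wallclock_seconds"] / 3600.0

# Bucket jobs by core count: a workload dominated by small jobs argues for
# a different system design than one dominated by large parallel jobs.
bins = [0, 1, 8, 64, 512, float("inf")]
labels = ["serial", "2-8", "9-64", "65-512", ">512"]
jobs["size_class"] = pd.cut(jobs["cores"], bins=bins, labels=labels)

share = jobs.groupby("size_class", observed=True)["cpu_hours"].sum()
print(share / share.sum())  # fraction of delivered CPU-hours per size class
```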
  2. This work presents a framework for estimating job wait times in High-Performance Computing (HPC) scheduling queues, leveraging historical job scheduling data and real-time system metrics. Using machine learning techniques, specifically Random Forest and Multi-Layer Perceptron (MLP) models, we demonstrate high accuracy in predicting wait times, achieving 94.2% reliability within a 10-minute error margin. The framework incorporates key features such as requested resources, queue occupancy, and system utilization, with ablation studies revealing the significance of these features. Additionally, the framework offers users wait time estimates for different resource configurations, enabling them to select optimal resources, reduce delays, and accelerate computational workloads. Our approach provides valuable insights for both users and administrators to optimize job scheduling, contributing to more efficient resource management and faster time to scientific results.
    Free, publicly-accessible full text available July 18, 2026
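A minimal sketch of the modeling step described in entry 2, assuming features such as requested resources, queue occupancy, and system utilization have already been extracted into a table. The synthetic data and the scikit-learn RandomForestRegressor below stand in for the authors' pipeline; they are not the paper's implementation, and the reported fraction is from synthetic data, not the 94.2% result.

```python
# Hypothetical sketch: predict queue wait time (seconds) from job and system
# features with a random forest, then report the fraction of predictions
# within a 10-minute error margin. Synthetic data, not the paper's code.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Assumed feature columns: requested cores, requested walltime,
# queue occupancy, and system utilization at submission time.
rng = np.random.default_rng(0)
X = rng.random((5000, 4))
y = 3600 * X[:, 1] * (0.5 + X[:, 2])  # synthetic wait times in seconds

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

pred = model.predict(X_te)
within_10min = np.mean(np.abs(pred - y_te) <= 600)
print(f"fraction of predictions within 10 minutes: {within_10min:.3f}")
```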
  3. ACCESS is a program established and funded by the National Science Foundation to help researchers and educators use the NSF national advanced computing systems and services. Here we present an analysis of the usage of ACCESS-allocated cyberinfrastructure over the first 16 months of the ACCESS program, September 2022 through December 2023. For historical context, we include analyses of ACCESS and XSEDE, its NSF-funded predecessor, for the ten-year period from January 2014 through December 2023. The analyses include batch compute resource usage, cloud resource usage, science gateways, allocations, and users.
  4. The engineering samples of the NVIDIA Grace CPU Superchip and NVIDIA Grace Hopper Superchip were tested using different benchmarks and scientific applications. The benchmarks include HPCC and HPCG. The real-application-based benchmarks include AI-Benchmark-Alpha (a TensorFlow benchmark), Gromacs, OpenFOAM, and ROMS. The performance was compared to multiple Intel, AMD, and ARM CPUs and several x86 systems with NVIDIA GPUs. A brief energy-efficiency estimate was performed based on TDP values. We found that in the HPCC benchmark tests, the per-core performance of Grace is similar to or faster than AMD Milan cores, and the high core count often allows the NVIDIA Grace CPU Superchip to reach per-node performance similar to Intel Sapphire Rapids with High Bandwidth Memory: slower in matrix multiplication (by 17%) and FFT (by 6%), and faster in Linpack (by 9%). In scientific applications, the NVIDIA Grace CPU Superchip is slower by 6% to 18% in Gromacs, faster by 7% in OpenFOAM, and falls between the HBM and DDR modes of Intel Sapphire Rapids in ROMS. The combined CPU-GPU performance in Gromacs is significantly faster (by 20% to 117%) than any tested x86-NVIDIA GPU system. Overall, the new NVIDIA Grace Hopper Superchip and NVIDIA Grace CPU Superchip are high-performance and most likely energy-efficient solutions for HPC centers.
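Entry 4 mentions a brief energy-efficiency estimate based on TDP values; one common way to make such an estimate is performance per watt, i.e. a benchmark score divided by the rated TDP of the node. The sketch below only illustrates that arithmetic; the system names and numbers are placeholders, not measurements reported in the paper.

```python
# Hypothetical sketch: performance-per-watt estimate from a benchmark score
# and a rated node TDP. All values below are placeholders, not paper results.
systems = {
    # name: (benchmark score in Gflop/s, node TDP in watts) -- illustrative
    "Grace CPU Superchip node": (5000.0, 500.0),
    "x86 reference node": (4500.0, 700.0),
}

for name, (gflops, tdp_watts) in systems.items():
    print(f"{name}: {gflops / tdp_watts:.1f} Gflop/s per watt")
```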